3 research outputs found

    On-Device Deep Learning Inference for System-on-Chip (SoC) Architectures

    As machine learning becomes ubiquitous, the need to deploy models on real-time, embedded systems will become increasingly critical. This is especially true for deep learning solutions, whose large models pose interesting challenges for resource-constrained target architectures at the “edge”. The realization of machine learning and deep learning is being driven by the availability of specialized hardware, such as system-on-chip solutions, which alleviates some of these constraints. Equally important, however, are the operating systems that run on this hardware, and specifically the ability to leverage commercial real-time operating systems which, unlike general-purpose operating systems such as Linux, can provide the low-latency, deterministic execution required for embedded, and potentially safety-critical, applications at the edge. Despite this, studies considering the integration of real-time operating systems, specialized hardware, and machine learning/deep learning algorithms remain limited. In particular, better mechanisms for real-time scheduling in the context of machine learning applications will prove critical as these technologies move to the edge. To address some of these challenges, we present a resource management framework that provides a dynamic, on-device approach to the allocation and scheduling of limited resources in a real-time processing environment. Such mechanisms are necessary to support the deterministic behavior required by the control components contained in edge nodes. To validate the effectiveness of our approach, we applied rigorous schedulability analysis to a large set of randomly generated simulated task sets and verified that the most time-critical applications, such as control tasks, maintained low-latency, deterministic behavior even under off-nominal conditions. The practicality of our scheduling framework was demonstrated by integrating it into a commercial real-time operating system (VxWorks) and then running a typical deep learning image processing application to perform simple object detection. The results indicate that our proposed resource management framework can be leveraged to facilitate integration of machine learning algorithms with real-time operating systems and embedded platforms, including widely used, industry-standard real-time operating systems.
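
    As a rough sketch of what the schedulability-analysis experiment described above can look like (not the authors' actual framework or VxWorks integration), the following Python example generates random periodic task sets with the UUniFast method and checks them with classical fixed-priority response-time analysis; the task counts, periods, and utilization level are arbitrary choices made only for illustration.

    # Illustrative schedulability analysis of randomly generated periodic task sets.
    # This is a sketch of classical rate-monotonic response-time analysis, not the
    # resource management framework described in the abstract.
    import math
    import random

    def uunifast(n, total_util):
        """Draw n task utilizations summing to total_util (UUniFast method)."""
        utils, remaining = [], total_util
        for i in range(1, n):
            next_remaining = remaining * random.random() ** (1.0 / (n - i))
            utils.append(remaining - next_remaining)
            remaining = next_remaining
        utils.append(remaining)
        return utils

    def response_time_analysis(tasks):
        """tasks: list of (C, T) with implicit deadlines and rate-monotonic
        priorities. Returns True if every task meets its deadline."""
        tasks = sorted(tasks, key=lambda ct: ct[1])  # shorter period = higher priority
        for i, (c_i, t_i) in enumerate(tasks):
            r = c_i
            while True:
                interference = sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
                r_next = c_i + interference
                if r_next > t_i:
                    return False          # deadline miss
                if r_next == r:
                    break                 # response time converged
                r = r_next
        return True

    if __name__ == "__main__":
        random.seed(0)
        trials, schedulable = 1000, 0
        for _ in range(trials):
            periods = [random.choice([5, 10, 20, 50, 100]) for _ in range(5)]
            utils = uunifast(5, 0.8)
            task_set = [(u * t, t) for u, t in zip(utils, periods)]
            if response_time_analysis(task_set):
                schedulable += 1
        print(f"{schedulable}/{trials} random task sets schedulable at 80% utilization")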

    Large-Scale Identification and Analysis of Factors Impacting Simple Bug Resolution Times in Open Source Software Repositories

    One of the most prominent issues the ever-growing open-source software community faces is the abundance of buggy code. Well-established version control systems and repository hosting services such as GitHub and Maven provide a checks-and-balances structure to minimize the amount of buggy code introduced. Although these platforms are effective in mitigating the problem, it still remains. To further the efforts toward a more effective and quicker response to bugs, we must understand the factors that affect the time it takes to fix one. We apply a custom traversal algorithm to the commits made to open-source repositories to determine when “simple stupid bugs” were first introduced and explore the factors that drive the time it takes to fix them. Using the commit history from the main development branch, we are able to identify the commit that first introduced 13 different types of simple stupid bugs in 617 of the top Java projects on GitHub. Leveraging a statistical survival model and other non-parametric statistical tests, we found two main categories of variables that affect a bug’s life: Time Factors and Author Factors. We find that bugs are fixed more quickly if they are introduced and resolved by the same developer. Further, we discuss how the day of the week and time of day at which buggy code was written and fixed affect its resolution time. These findings provide vital insight to help the open-source community mitigate the abundance of buggy code and can be used in future research to aid in bug-finding programs.
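
    As a rough illustration of the kind of survival modelling mentioned above (the abstract names a statistical survival model but not a specific implementation), the sketch below fits a Cox proportional hazards model with the lifelines library to a synthetic bug-lifetime table; the column names, simulated data, and the choice of Cox regression are assumptions made only for this example, not the authors' actual dataset or pipeline.

    # Hedged sketch: Cox proportional hazards model on synthetic bug lifetimes.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(42)
    n = 500
    same_author_fix = rng.integers(0, 2, n)        # introducer == fixer
    weekend_commit = rng.integers(0, 2, n)         # buggy commit made on a weekend
    n_contributors = rng.integers(1, 300, n)       # project contributor count

    # Simulate lifetimes so same-author fixes tend to be faster and busier
    # projects slower; censoring is simplified (observed-so-far lifetime).
    baseline = rng.exponential(60, n)
    lifetime = (baseline
                * np.where(same_author_fix == 1, 0.6, 1.0)
                * (1.0 + n_contributors / 300.0))
    fixed = (rng.random(n) < 0.9).astype(int)      # ~10% of bugs still open

    bugs = pd.DataFrame({
        "lifetime_days": lifetime,
        "fixed": fixed,
        "same_author_fix": same_author_fix,
        "weekend_commit": weekend_commit,
        "n_contributors": n_contributors,
    })

    # Hazard ratio > 1 for a covariate means bugs with that attribute are fixed faster.
    cph = CoxPHFitter()
    cph.fit(bugs, duration_col="lifetime_days", event_col="fixed")
    cph.print_summary()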

    Exploring Behaviors of Software Developers and Their Code Through Computational and Statistical Methods

    As Artificial Intelligence (AI) increasingly penetrates all aspects of society, many obstacles emerge. This thesis identifies and discusses the issues facing Computer Vision and significant deficiencies in the Software Development Life-cycle that need to be resolved to facilitate the evolution toward true artificial intelligence. We explicitly review the concepts behind Convolutional Neural Network (CNN) models, the benchmark for computer vision. Chapter 2 highlights the mechanisms that have popularized CNNs while also specifying significant gaps that could render the model inadequate for future use in safety-critical systems. We put forward two main limitations. Namely, CNNs do not use lack of information as information, something that humans do intuitively. Further, the process of training puts CNNs at a disadvantage relative to the human brain and experience: while CNNs are trained by directly introducing entire concepts, people acquire knowledge incrementally. Because we learn the basics before the details, we are able to analyze imagery more expeditiously and successfully. Another obstacle limiting the progression of Artificial Intelligence is the set of factors that decrease the efficiency of programmers and their code. Chapters 3 and 4 explore some of these aspects. The analysis of computer code as a field of study has recently been receiving more attention; however, most of this research focuses on the code as the artifact. This dissertation takes a different position by centering on the effects of computer programmers' behaviors on their code. We explore the practicality of current in-code documentation practices by assessing whether erroneous computer comments can mislead programmers. Our experimental model relies on asking participants to describe the functionality of code fragments which, unbeknownst to them, have misleading comments. We find that expert programmers are more prone to rely on these erroneous explanations, and we further visualize the phenomenon using an eye-tracker. Finally, we present heatmaps showing that experts concentrated more time and attention on the documentation than on the code. In the next chapter, we conduct a large-scale analysis of the circumstances determining how long a bug will be present in a project. This study looks at 63,923 simple, one-statement bug fixes in open-source Java projects. We conclude that bugs fixed by the same programmer who introduced them are resolved 1.71 times faster. We also challenge Linus' Law, which states that “given enough eyeballs, all bugs are shallow.” Our analysis finds that projects with more contributors have significantly longer-lived bugs, indicating that the mere number of people working on a project is not sufficient protection against bugs.
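
    As a small, hedged illustration of how the Linus' Law comparison above could be run as a non-parametric test, the sketch below contrasts bug lifetimes between low- and high-contributor projects; the synthetic lifetimes, the contributor split, and the Mann-Whitney U test are assumptions for this example, not the thesis' actual analysis.

    # Hedged sketch: non-parametric comparison of bug lifetimes by contributor count.
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(7)

    # Hypothetical bug lifetimes in days, drawn so that the high-contributor group
    # tends to have longer-lived bugs (matching the thesis finding).
    few_contributors = rng.exponential(scale=30, size=400)    # low-contributor projects
    many_contributors = rng.exponential(scale=55, size=400)   # high-contributor projects

    stat, p_value = mannwhitneyu(many_contributors, few_contributors,
                                 alternative="greater")
    print(f"Mann-Whitney U = {stat:.0f}, one-sided p = {p_value:.2e}")
    print(f"median lifetime (few contributors):  {np.median(few_contributors):.1f} days")
    print(f"median lifetime (many contributors): {np.median(many_contributors):.1f} days")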